Calibration and Consistency of Adversarial Surrogate Losses

Neural Information Processing Systems

Adversarial robustness is an increasingly critical property of classifiers in applications. The design of robust algorithms relies on surrogate losses since the optimization of the adversarial loss with most hypothesis sets is NP-hard. But, which surrogate losses should be used and when do they benefit from theoretical guarantees? We present an extensive study of this question, including a detailed analysis of the $\mathcal{H}$-calibration and $\mathcal{H}$-consistency of adversarial surrogate losses. We show that convex loss functions, or the supremum-based convex losses often used in applications, are not $\mathcal{H}$-calibrated for common hypothesis sets used in machine learning.
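To make the objects discussed in the abstract concrete, here is a hedged sketch of the standard definitions involved; the notation ($\gamma$ for the perturbation radius, $\phi$ for a convex margin loss) is assumed and may differ from the paper's exact conventions.

```latex
% Adversarial 0/1 loss of a hypothesis h on example (x, y), y in {-1, +1}:
% the worst-case misclassification over a gamma-ball around x.
\widetilde{\ell}_\gamma(h, x, y)
  = \sup_{\|x' - x\| \le \gamma} \mathbb{1}_{\, y\, h(x') \le 0}.

% Supremum-based convex surrogate: replace the 0/1 indicator with a
% convex margin loss phi (e.g. the hinge loss phi(t) = max(0, 1 - t)):
\widetilde{\Phi}_\gamma(h, x, y)
  = \sup_{\|x' - x\| \le \gamma} \phi\bigl(y\, h(x')\bigr).
```

The abstract's negative result concerns surrogates of this second form: even though $\widetilde{\Phi}_\gamma$ upper-bounds $\widetilde{\ell}_\gamma$ whenever $\phi$ upper-bounds the indicator, that alone does not make it $\mathcal{H}$-calibrated for common hypothesis sets $\mathcal{H}$.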